8 research outputs found

    A Temporal extension of Prolog

    Temporal Prolog, a temporal logic extension of PROLOG, is presented. The primary criterion for the model selection has been its natural embedding into the logic programming paradigm. Under strong efficiency constraints, a first-order “reified” logic has been taken as the basis for the implementation. Allen's temporal constraint algorithm has been extended to handle retractable constraints, and the embedding of these constraints into Temporal Prolog can be viewed as an instance of the Constraint Logic Programming paradigm. An example inspired by K. Forbus's Qualitative Process Theory illustrates how qualitative simulation and related tasks can be formulated in Temporal Prolog in a transparent and declarative way.
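
    The abstract mentions Allen's interval relations and retractable constraints only at a high level. As a rough, hedged illustration (in Python rather than the paper's Prolog, and not the authors' implementation), the sketch below classifies the basic Allen relation between two intervals whose numeric endpoints are already known; the paper itself reasons over symbolic, possibly retractable constraints.

# Minimal sketch: classify the basic Allen relation between two intervals
# with known numeric endpoints. Illustrative only; Temporal Prolog works
# with symbolic, retractable temporal constraints rather than fixed endpoints.

def allen_relation(a, b):
    """Return the basic Allen relation of interval a=(a1,a2) w.r.t. b=(b1,b2)."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

print(allen_relation((1, 3), (3, 5)))  # meets
print(allen_relation((1, 4), (2, 6)))  # overlaps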

    Representational Capacity of Deep Neural Networks -- A Computing Study

    There is some theoretical evidence that deep neural networks with multiple hidden layers have a potential for more efficient representation of multidimensional mappings than shallow networks with a single hidden layer. The question is whether this theoretical advantage can be exploited to find such representations with the help of numerical training methods. Tests using prototypical problems with a known mean square minimum did not confirm this hypothesis: minima found with the help of deep networks have always been worse than those found using shallow networks. This does not directly contradict the theoretical findings; it is possible that the superior representational capacity of deep networks is genuine, while finding the mean square minimum of such deep networks is a substantially harder problem than with shallow ones.
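
    As a hedged illustration of the kind of computing study the abstract describes (a generic sketch using PyTorch, not the authors' experimental setup, prototypical problems, or training protocol), one can train a shallow and a deep MLP of broadly comparable parameter count on the same regression target and compare the mean square error each reaches.

# Generic sketch of a shallow-vs-deep comparison on a synthetic regression
# task. The target function, network sizes and optimizer settings are
# illustrative assumptions, not the paper's experimental protocol.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(2048, 4) * 2 - 1                 # inputs in [-1, 1]^4
y = torch.sin(3 * X).prod(dim=1, keepdim=True)  # smooth multidimensional target

shallow = nn.Sequential(nn.Linear(4, 192), nn.Tanh(), nn.Linear(192, 1))
deep = nn.Sequential(
    nn.Linear(4, 24), nn.Tanh(),
    nn.Linear(24, 24), nn.Tanh(),
    nn.Linear(24, 24), nn.Tanh(),
    nn.Linear(24, 1),
)

def train(model, steps=3000, lr=1e-3):
    """Fit the model to (X, y) by Adam and return the final training MSE."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

print("shallow MSE:", train(shallow))
print("deep MSE:   ", train(deep))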

    Unsupervised learning by backward inhibition

    Backward inhibition in a two-layer connectionist network can be used as an alternative to, or an enhancement of, the competitive model for unsupervised learning. Two feature discovery algorithms based on backward inhibition are presented. It is shown that they are superior to the competitive feature discovery algorithm in feature independence and controllable grain. Moreover, the representation in the feature layer is distributed, and a certain "classification hierarchy" is defined by th
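
    The abstract positions backward inhibition against the standard competitive (winner-take-all) feature discovery model but does not spell out the new algorithms. The sketch below therefore only illustrates that competitive baseline in NumPy; the network size, toy data and learning rate are illustrative assumptions, and none of this is the paper's backward-inhibition method.

# Sketch of the standard competitive (winner-take-all) feature discovery
# baseline that the abstract compares against -- NOT the paper's
# backward-inhibition algorithms, which the abstract does not detail.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_features = 16, 4
W = rng.normal(scale=0.05, size=(n_features, n_inputs))  # one feature vector per row

def competitive_step(x, W, lr=0.05):
    """Winner-take-all update: move the closest feature vector toward x."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W[winner] += lr * (x - W[winner])
    return winner

# Toy data: noisy samples around a few binary prototype patterns.
prototypes = rng.integers(0, 2, size=(n_features, n_inputs)).astype(float)
for _ in range(3000):
    p = prototypes[rng.integers(n_features)]
    x = np.clip(p + rng.normal(scale=0.1, size=n_inputs), 0.0, 1.0)
    competitive_step(x, W)

print(np.round(W, 2))  # rows drift toward (some of) the prototypes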

    Multivariate distribution models with generalized hyperbolic margins

    Multivariate generalized hyperbolic distributions represent an attractive family of distributions (with exponentially decreasing tails) for multivariate data modelling. However, in a limited-data environment, robust and fast estimation procedures are rare. In this paper we propose an alternative class of multivariate distributions (with exponentially decreasing tails), obtained as affine-linear transformations of random vectors with stochastically independent generalized hyperbolic marginals. The latter distributions possess good estimation properties and have attractive dependence structures, which we explore in detail. In particular, dependencies of extreme events (tail dependence) can be modelled within this class of multivariate distributions. In addition, we introduce the necessary estimation and random-number generation procedures. Various advantages and disadvantages of both types of distributions are discussed and illustrated via a simulation study.
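
    To make the construction concrete, here is a minimal sketch that samples a random vector of this type: independent generalized hyperbolic marginals followed by an affine-linear map. It assumes scipy.stats.genhyperbolic (available in recent SciPy versions) for the univariate marginals; the GH parameters, the matrix A and the shift mu are arbitrary illustrative choices, not values estimated by the authors.

# Minimal sketch of the construction described in the abstract: an
# affine-linear transform A @ z + mu of a vector z with stochastically
# independent generalized hyperbolic (GH) marginals. All parameter values
# below are illustrative assumptions.
import numpy as np
from scipy.stats import genhyperbolic

rng = np.random.default_rng(0)
n = 10_000

# Independent GH marginals (shape parameters p, a, b chosen for illustration).
params = [(1.0, 2.0, 0.5), (-0.5, 1.5, 0.0), (0.5, 3.0, -1.0)]
z = np.column_stack([
    genhyperbolic.rvs(p, a, b, size=n, random_state=rng) for p, a, b in params
])

# Affine-linear transformation inducing the dependence structure.
A = np.array([[1.0, 0.0, 0.0],
              [0.4, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
mu = np.array([0.0, 1.0, -1.0])
x = z @ A.T + mu

print(x.shape)
print(np.round(np.corrcoef(x, rowvar=False), 2))  # induced correlations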

    Multivariate distribution models with generalized hyperbolic margins

    R. Schmidt et al., 2006, Multivariate distribution models with generalized hyperbolic margins

    Graphics goodies #1—a filling algorithm for arbitrary regions
